In November, the European Commission unveiled the long-awaited Democracy Shield initiative, designed to counter threats such as disinformation and foreign interference. While it acknowledges the systemic risk European democracy faces, the Shield falls short of addressing its root causes: engagement-based algorithms, Big Tech dominance, geopolitical dependency, and deep divisions that make Europe vulnerable to polarising messages. An interview with Alexandra Geese, Green Member of the European Parliament.
Alice Stollmeyer: To start from something positive, what did you find convincing in the European Democracy Shield presented by the Commission?
Alexandra Geese: The analysis of the problem is compelling. It is a clear acknowledgement that the way algorithms work – engagement-based ranking and manual manipulation – poses a systemic risk to civic discourse and electoral processes, and this is now stated in a Commission document. This analysis therefore goes beyond simply blaming malevolent actors; it recognises that the threat we face is a combination of a systemic risk and bad actors taking advantage of that system.
Based on Article 34 of the Digital Services Act (DSA), the Commission should be forcing digital platforms to take mitigating measures. Instead, it chooses to do nothing, and this is the worrying part.
The Shield still frames the threats as mostly foreign, largely ignoring internal dangers like democratic backsliding, the rise of the far right, attacks on civil society, corruption, media capture, shrinking civil space – you name it. How do you explain these blind spots?
It’s a political choice. The Commission acknowledges there’s a risk to democracy, but it doesn’t name the actors behind this risk because they are powerful. Far-right parties sit in the European Parliament, and one of the reasons why civil space is shrinking is that the [centre-right] European People’s Party (EPP), where the Commission’s own political majority comes from, actively campaigns against NGOs. There is also some reticence to tackle the problematic influence of US tech companies boosting disinformation and extremist political actors. The Trump administration has been threatening the EU Commission and unfortunately the Commission President Ursula von der Leyen has given in to the blackmail.
If the Commission, the guardian of the Treaties, won’t be bold and outspoken in protecting European values like democracy and the rule of law, might the European Parliament step up its role?
We made a series of suggestions, even though we need to remember that the Parliament also has political majorities.
First, we are asking the Commission to act on its recognition that algorithms represent a systemic risk. With the DSA, we have a legal framework that is a perfect fit for that. As Greens, we’re also asking for substantial funding for alternative social media platforms. If you want to establish a platform with a business model that is not dependent on tech billionaires, not reliant on surveillance advertising, and not based on toxic algorithms, you must be ready to put a few million euros of seed funding on the table. We need to build something sustainable that strengthens democracy rather than weakening it. The Democracy Shield has only a few passages about European social media platforms, and they are very vague. There are a number of initiatives out there that we could build on, and the Commission should be talking to them and helping them get off the ground.
We need a different advertising model because all revenue is going to Google and Meta, while the press gasps for air.[1] These are the very same companies using algorithms that are toxic for democracy. There should also be an initiative by the Commission to talk to publishers and advertisers, and see what a different, European business model of advertising could look like, but I’m not sure we have a majority for that in the European Parliament.
There’s more consensus on the importance of countering foreign interference and being more transparent about the risks it poses. In Germany, for example, there have been rumours that acts of violence before the most recent federal election were instigated by foreign actors through stochastic terrorism [whereby the incendiary rhetoric of public figures inspires acts of violence]. But little about this has been made public so far.
Would the EPP be in favour of more transparency about foreign threats?
It is in their interest, because they have shown that they care very much about direct foreign influence and hybrid warfare.
From my perspective, the main focus should be on reining in toxic algorithms, because if you do that, you tackle the problem at its roots. If you can no longer target groups of people who are particularly vulnerable and receptive to a certain kind of radicalising content, you can’t achieve your destabilisation goals so easily. But if we don’t have agreement on that because nobody wants to anger President Trump, which is the elephant in the room right now, at least being more transparent about these malign operations would be a positive first step.
You’ve already mentioned the need to tackle engagement-based ranking as the root of the problem. How would that look in practice?
First, we need an investigation into how engagement-based ranking contributes to, amplifies, and incentivises disinformation, not only political but also commercial. Second, as a mitigating measure, the Commission should ban engagement-based ranking.
The platforms argue that interaction is the user’s choice, but this narrative is profoundly wrong. If I’m attacked in the street, as users are systematically attacked by polarising content and disinformation, I might react. But I never chose to be attacked in the first place. So we should move away from engagement and towards real user choice. It’s the platforms’ job to figure out how to implement this, and there’s a lot of expertise out there from people who used to work on trust and integrity teams at these companies. In an environment of real user choice, some people are still going to consume media that we might consider disinformation. But I think it would be a small minority.
Third, we must acknowledge that Elon Musk, in particular, is actively manipulating the algorithm to push polarising political content on X. There are now solid studies consistently showing that. In Germany, X has amplified far-right and pro-Putin content, which in the German context is also popular on the far left. In the UK, it has favoured Rupert Lowe, a far-right extremist who’s even more radical than Nigel Farage. And in Poland, X has propelled the rise of Konfederacja, a party that sits to the right of Law and Justice. It’s clear that Musk picks individual politicians and increases their reach. Here, the criterion is not even engagement or the number of followers. It’s a clear example of political manipulation that the Commission cannot ignore.
Aside from reining in disruptive algorithms, what’s the Greens’ perspective on addressing Big Tech dominance over key infrastructure?
The DSA very clearly says you cannot have sensitive data in your targeting profile. The next step should have been a piece of legislation that extends those provisions from platforms to all data brokers. Instead, the Commission’s Digital Omnibus is going in the opposite direction: as long as a data broker doesn’t hold the technology to trace the data back to the individual, it’s perfectly fine to keep sensitive data and sell it to others. The intention is to make this legitimate for the purpose of AI training, so that people can’t protest against it anymore. The Democracy Shield is not calling this out or addressing it in any way, and that’s an absolute scandal. The targeting and surveillance capacities of these companies are one of the reasons why our democracy is going down the drain.
Another key aspect is enforcing the Digital Markets Act (DMA), which would address competition and antitrust action. What Google is doing right now with its “AI mode” and “AI overview” is killing the free press, and the Commission is not addressing that in a meaningful way. Its remedy is to set up funds for the media, a move that the far right is already attacking as support for “leftist” outlets. If you give public funds to specific media, you have to justify how you allocate them. And that’s not an easy task. Instead, the Commission should sustain a European social platform, some kind of media aggregator that enables a sustainable business model for journalists. We need a structural shift, not stopgap measures that lack any sense of urgency. Some parts of Eastern Europe are already media deserts, and even in my country, Germany, publishers are desperate.
Emmanuel Macron has recently said that we have been incredibly naive in entrusting our democratic space to American and Chinese platforms whose interests are not at all the survival or proper functioning of our democracies. Is the European Democracy Shield missing a geopolitical opportunity to push for digital sovereignty?
Yes, absolutely. There’s no analysis of the problem. For example, if Trump’s sanctions on the International Criminal Court are not an attack on our democratic institutions and the rule of law, I don’t know what is. The Department of State is basically saying that the US wants pro-democratic partners out of government in Europe, ushering in the far right’s takeover. Yet none of this is mentioned in the Democracy Shield, including the attacks on the DSA. The EU should make clear, in a diplomatic way, that it’s not going to tolerate this kind of interference in its internal affairs and the rule of law.
Not addressing this is a missed opportunity, and it gives Trump a legitimacy he no longer enjoys in his own country. The Commission is treating him as if he were going to be there forever, and as if the alliance between tech companies and the MAGA movement would last forever. But in the US, there’s very little support for Big Tech, Trump’s approval rates are at record lows, and even Republicans who are up for re-election in the 2026 midterms are starting to look at Trump as a liability. Americans are extremely unhappy with the level of polarisation that the political debate has reached, to the point of no longer being able to talk to their neighbours.
That is exactly what we do not want to happen in Europe. When I hear European politicians talking about “simplification” to resemble the US model more, I am deeply concerned. We do want innovation and technology in Europe, but we want it to be democratic. Giving away our data and renouncing democratic values is not where we want to go.
Do you see any European leaders ready to stand up for these values?
Macron and Sánchez [Spain’s prime minister] are doing it, but not enough. I have very high hopes for the new Dutch government, but let’s see how coalition negotiations evolve. I am less optimistic about Germany because the new government seems set to fold completely. We also need Eastern European governments to take a strong position on this.
Russia’s full-scale invasion of Ukraine constituted an epochal shift (what Germans call Zeitenwende) because it made us aware that the peace Europe took for granted wasn’t forever, and that we need to be prepared for external aggression. The 28-point plan presented by Trump represents another game-changer. Before, the blackmail was that if Europe enforced its digital legislation to protect its democracy, the US would cease support for Ukraine, pull its troops out of NATO, and raise tariffs. Faced with that choice, the argument we used to hear in Europe was that it’s more important to defend the territory of Ukraine by securing continued US military support than to protect our democratic space. Now it’s clear that the two things are no longer in opposition because the US has openly sided with Putin.
Some European governments have already said they don’t support the plan. The second step should be to protect our democratic space, so that Russia doesn’t win the hearts and minds of our people. It’s a lot easier to swing an election if you’re good at manipulating the digital space than to win a war on the battlefield. For Russia, it’s so much easier to bring Putin-friendly parties into power than to win a war against NATO.
Essentially, what you are saying is that when you’re the target of hybrid warfare, regulating platforms is not just about protecting democracy – it’s also about security.
In the Kremlin’s hybrid warfare toolbox, disinformation or information manipulation is only one of many tools alongside corruption, economic coercion, cyber attacks, and sabotage, as we have seen recently. However, in the Democracy Shield communication, the emphasis of the Commission’s measures is mainly on fact-checking. What’s your take on that?
It feels like being back in 2018, when we still believed that giving people access to the facts would solve the problem of disinformation. However, all evidence shows the opposite: fact-checking is not effective in countering disinformation. I’m not opposed to verifying facts, but I see it as part of what journalists already do when they can do their job properly. As long as we don’t change the algorithms, disinformation will continue to reach more people than quality information is able to. Investing in fact-checking and media literacy often becomes an excuse for not doing anything else. This becomes even more problematic when fact-checking is funded by the same tech companies that enable disinformation to spread.
For tech companies, it is also a convenient way of shifting responsibility to fight disinformation onto individual users.
Exactly. A good example of this is media literacy, another Commission favourite that is often coupled with fact-checking. It’s the idea that people have to check their sources and be able to distinguish true from false content. But today, most people only read headlines on social media, and this is what stays in their heads. Only a small elite has the time to delve deeper and verify the sources, and even then, this is rarely effective.
You have this paradox where companies making hundreds of billions in profits shift the burden of fighting disinformation onto individuals who are already struggling with rising housing costs – for example, by making them pay twice: once with their data, and once with the time it takes to figure out what is reliable information. And if it’s not individuals, it’s school systems that are asked to provide media literacy education. But schools don’t have resources either, and in any case, this doesn’t address the psychological and emotional dynamics behind the spread of disinformation. Research shows that users are likely to see disinformation six to ten times more often than factual information, and there’s evidence that this is enough to make disinformation “truer” than facts for our brains.
I’m also worried about a conflict of interest. The European Digital Media Observatory, which maps and coordinates anti-disinformation activities across Europe, receives funding from the Commission but also has a decision-making role in the European Media and Information Fund (EMIF), which received 25 million euros from Google. I sent a written question to the Commission about this, and the response was that there is no conflict of interest. But I think that the organisation that coordinates European fact-checking networks should be very far away from any kind of Big Tech money.
What’s your take on the proposal, mentioned in the Democracy Shield, to set up a European Centre for Democratic Resilience to withstand common threats such as disinformation and election interference?
It’s a good idea for anti-disinformation initiatives from different countries to talk to each other because disinformation campaigns are global in nature. But I’m not sure how much willingness there is in European countries. And I wonder whether setting up another organisation is the most urgent thing to do, especially if it has no power to act on its findings.
But it is essential to learn from each other. Romania had to rerun its presidential elections because bot networks were built, influencers were paid, certain cultural grievances were used to boost an extreme-right candidate, and then engagement-based ranking did the rest. TikTok failed to detect this operation, while it downgraded other political content on the platform. It’s a modus operandi we are seeing in many countries. If Romanian authorities could share their experiences and conclusions in a meaningful way, we could find ways to counter these operations. For example, we could turn off engagement-based ranking 90 days before elections. My fear is that the Commission, for the same reasons it doesn’t take action itself, is going to say that individual countries can’t turn off engagement-based ranking because that is the Commission’s competence under the DSA. But we could learn a lot from what happened in Romania, Moldova, Poland, and other Eastern European countries that are at the forefront of this hybrid war.
Footnotes

[1] In September, the Commission fined Google for breaching EU antitrust rules by distorting competition in the advertising technology industry, favouring its own online advertising technology services to the detriment of competing providers and online publishers.